Geoffrey Hinton, one of the "deep learning triumvirate" and 2018 Turing Award winner, has revealed that he has left Google.

Born in Britain in 1947, Hinton is an expatriate AI pioneer. In 2012, he and two of his graduate students at the University of Toronto, Ilya Sutskever and Alex Krizhevsky, built a convolutional neural network that won the ImageNet Large Scale Visual Recognition Challenge by a wide margin, carrying deep learning into the mainstream.

But a growing number of critics have recently argued that the aggressive rollout of products built on generative artificial intelligence is a race toward danger. On Monday, Hinton formally joined them: The New York Times reported that he had quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so that he could speak freely about the risks of artificial intelligence.

After OpenAI, a leading U.S. artificial intelligence startup, released its multimodal large model GPT-4 and a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of more advanced systems, arguing that AI technology "poses profound risks to society and humanity." A few days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, published their own open letter warning of the risks of artificial intelligence.

But Hinton signed neither letter, saying he did not want to publicly criticize Google or other companies until after he had resigned. He notified Google last month that he intended to leave, and on Thursday he spoke by phone with Sundar Pichai, chief executive of Google's parent company Alphabet, though he declined to discuss the details of that conversation.

Hinton long argued that building neural networks that learn from large amounts of digital text is a powerful way for machines to understand and generate language, but not as powerful as the way humans process language. His view changed last year as Google and OpenAI built systems trained on far larger amounts of data. He still thinks these systems are inferior to the human brain in some ways, but he believes they eclipse human intelligence in others.

He believes that as companies improve their AI systems, they will become increasingly dangerous. "Look at how it was five years ago and how it is now," he said of the technology. "Take the difference and propagate it forwards. That's scary."

Until last year, he said, Google had acted as a "proper steward" of the technology, careful not to release anything that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot, challenging Google's core business, Google is racing to deploy the same kind of technology, and Hinton believes the tech giants are locked in a competition that may be impossible to stop.

His biggest fear is that the internet will be flooded with fake photos, videos and text, and that ordinary people will "no longer know what's real." He also fears that artificial intelligence will eventually upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could also replace paralegals, personal assistants, translators and others who handle rote tasks.

He fears that future versions of the technology could pose a threat to humanity because they often learn unexpected behaviors from the vast amounts of data they analyze. This becomes a problem, in Hinton's view, when individuals and companies allow AI systems not only to generate their own computer code but also to run that code themselves. And he dreads the day when truly autonomous weapons, so-called killer robots, become a reality.

Hinton's resignation from Google has set off a fresh wave of debate about AI's potential risks across the global AI community. Concerned about irreversible harm to people and businesses, he has chosen to stand up and sound a warning from the center of the AI frenzy.

Hinton also warned that the competition between Google, Microsoft and other companies could escalate into a global race that will not stop without some form of global regulation.

In his view, unlike nuclear weapons, there is no way to know whether companies or countries are working on the technology in secret. The best hope, he says, is for the world's top scientists to collaborate on ways to control it. "I don't think they should scale it up further until they figure out whether they can control it," he stressed.